                               CONTENTS

    I.  Introduction
   II.  Motivation, Design Goals, Relation to FOL, and Basic Approach
  III.  Problematic Goal Seeking, Goal Refinement, and Achievement Planning
   IV.  A Really General Problem Solver, Based on a General Scheme for the
        Structuring and Use of Specialized Knowledge
          IV.B  A General Scheme for the Classification and Structuring of
                Specialized Heuristic Knowledge
          IV.C  The Acquisition of Specialized Heuristic Knowledge for Planning
          IV.D  The Use of Specialized Knowledge in a Truly General Problem Solver
    V.  Toward a General, Human-Like, Artificial Intelligence
   VI.  The Recognition and Analysis of Human Beliefs and Intentions
  VII.  Advice Taking and Advice Seeking: Theory, and Applications to
        Knowledge Acquisition and to Program Development and Maintenance
 VIII.  The Organizational Structure of Component Problem Solvers
   IX.  The Frame and Qualification Problems
    X.  The First Step:  A Knowledge-Based, Hierarchical Travel Planner
   XI.  Extension to an Advice-Taking Travel Planner
  XII.  References

	 Design of the New Problem Solver as of 31 March, 1980 --
		     A Current Snapshot and Overview

  STANFORD ARTIFICIAL INTELLIGENCE LABORATORY -- FORMAL REASONING GROUP

Lewis G. Creary 		    This page last revised:   23 Mar 1980

   I.  Introduction

In Section 2.4.9 of our most recent proposal to ARPA, a goal was set for
completion of a preliminary study of the new problem solver and a decision
on the general direction for its design by April, 1980.  This goal will be
met; in this informal interim report we shall sketch the developing new
design and the main ideas on which it is based.  We wish to emphasize at the
outset that this report does not contain detailed program specifications
(these will come later this year); rather, it surveys the high-level
research goals and design ideas on which the detailed specifications will be
based.  Also, it should be kept in mind that the material presented here is
still to some extent tentative; this report constitutes a stop-action
snapshot of a set of ideas that is still undergoing active development.

Our new problem solver is being designed as a general-purpose, advice-
taking, knowledge-based system for planning and design.  It is intended to
provide an ongoing software environment for the development and testing of
programs and theories within the Formal Reasoning Group.  The major AI themes
that we plan to develop within this system (and that to some extent are being
addressed in the initial design) are the following:

   1.  A general framework for the structuring and use of specialized
       knowledge as the foundation for a truly general problem solver.

   2.  A truly general knowledge-based problem solver (with general
       motivational system, including time-driven task agenda) as the
       basis for a general, human-like, artificial intelligence.

   3.  Simulative use of the program's own human-like thought mechanisms
       as a means of recognizing and analyzing the beliefs, desires,
       plans, and intentions of people and other intelligent organisms.
DOUBT.1 - Simulation isn't adequate for understanding the mental states
of others, because the information needed for simulation isn't available,
and the questions whose answers are wanted are often not simply obtainable
from simulation.

   4.  Advice taking as a means of knowledge acquisition and program
       development.

   5.  Various kinds of learning as further means of knowledge acquisition.

   6.  The logical (and conceptual) analysis of causal reasoning as a
       basis for solution of the frame and qualification problems.

Though we intend eventually (and are designing now) to create a software
environment that can serve for about ten years of research and development,
our initial implementation effort will be directed toward the construction
of a single, knowledge-based, hierarchical planner that deals with an
expanded version of the problem domain of the original advice taker design
-- travel planning and related communications planning.  As our work
progresses, the control structure of this initial program will be expanded
and generalized in accordance with the design now being developed, and its
domain enlarged to include problems of plan recognition and analysis.  It
is hoped that the resulting system will eventually be able to function in
some non-trivial ways as an intelligent assistant to a (human) military
intelligence analyst, in roughly the manner sketched at the beginning of
Section 2.2 of our most recent proposal.

The remaining sections of this interim report will discuss in greater detail
some of the ideas that we have sketched in this introduction.  In order to
make this report relatively self-contained, we include a section that
reproduces most of the material in Section 2.4.9 of our last proposal,
concerning the basic motivation, design goals, etc. for the problem solver
project.

  II.  Motivation, Design Goals, Relation to FOL, and Basic Approach

                              This Page Last Revised:   18 Mar  1980

                       THE NEW PROBLEM SOLVER

                             Motivation

	In general, we wish to keep our thinking in touch with the
prime criterion of value for ideas in artificial intelligence -- their
contribution to the successful performance of problem-solving, question-
answering, and learning programs.

	More specifically, we wish to be able to test realistically
(and, when possible, to demonstrate) the problem-solving power of ideas,
approaches, and techniques developed within the AI & Formal Reasoning
Group.

	Still more specifically, we wish to investigate the heuristic
adequacy of some ideas recently developed by McCarthy and by Creary
concerning:
   a) representation of, and reasoning with, facts about knowledge,
      belief, and desire
   b) analysis of the concept of epistemic feasibility
   c) the form of some fallible, but highly plausible, reasoning
   d) non-trivial advice taking.

	A related, but somewhat different, motivation is the desire to
determine where the weakest threads are in the fabric of current AI
heuristic technology -- especially in relation to the solution of
problems of the type characteristically investigated by our group.

 
                            Design Goals

  I.  The problem solver should provide a convenient and effective test
      bed and development tool for the research conducted by members of
      the AI & Formal Reasoning Group.  Accordingly, it should emphasize
      (but not overemphasize) the role of logic in problem solving, and
      should facilitate the evaluation and improvement of ideas and
      techniques developed within the group.  The system should be
      sufficiently broad and flexible to accommodate differences of
      opinion within the group concerning how best to approach a given
      problem.  No member of the group should come to feel that the
      system unduly constrains the forms of proposed solutions to the
      problems of interest to him.

 II.  The design should reflect the present state of the art in problem
      solving.  We don't want to re-invent anything, or (still worse) to
      have readily avoidable deficiencies in the program.  We want to
      provide the best possible problem-solving environment in which to
      try out our ideas.

III.  The problem solver should be readily revisable, in order to
      facilitate and encourage experimentation with a number of
      different problem-solving designs.  This ease of revision is
      essential if the problem solver is to function over a significant
      period of time as a truly useful research tool.  The way to
      achieve this goal is to design the program in such a way that
      those conceptually independent features of it which might be
      subject to change (memory retrieval algorithm, problem solving
      control structure, etc.) can be varied independently of one
      another in the program, and are represented by easily variable
      program elements such as production rules, parameter tables, etc.
      The implication of this approach is that we should try to design,
      not a particular problem solver, but a system or framework for
      problem solving that can be "programmed" to instantiate any of a
      large number of particular problem solvers.  Concern with this
      issue need not bog us down in system building, however; it should
      be possible to start by writing a particular problem solver with
      built-in generalization points.  The important thing is to keep
      this design goal in mind from the beginning.


                           Relation to FOL

	The elementary parts of FOL and the proposed new problem solver
are the tools of different, complementary approaches to research in
artificial intelligence.  In its simplest form, FOL may be viewed as the
proof-checking component in McCarthy and Hayes's [1969] "Missouri program"
(MP), and as such, is a tool of epistemological research; such research
may (and probably should) be carried quite some distance in abstraction
from heuristic issues before attempts are made to implement ideas in a
full problem-solving context.  On the other hand, the proposed new problem
solver is a version of McCarthy and Hayes's "reasoning program" (RP), and,
in the presence of basic FOL, can be reserved for study of the primarily
heuristic aspects of a given class of problems.  It is to be expected that
research economies will be achieved by using the two programs thus in
tandem, since it will probably be cheaper to check the epistemological
adequacy of ideas with FOL than with the problem solver.  Moreover, this
division of labor will permit individual researchers to focus closely on
manageable subparts of a given AI problem.


                           Basic Approach

	Our basic approach will be to use state-of-the-art AI techniques
to construct an implementation of John McCarthy's "Advice Taker" program
more or less along the lines first laid down in his paper "Programs with
Common  Sense"  (1959),  and  extended  in McCarthy and Hayes (1969) and
subsequent papers by McCarthy.  A preliminary review of the Advice-Taker
design  in  the light of currently available AI techniques suggests that
the  basic  design  is  sound,  and  that  prospects  for  a  successful
implementation are good.

 III.  Problematic Goal Seeking, Goal Refinement, and Achievement Planning

					This page last revised: 20 Mar 1980

In order to provide some perspective on the design of a truly general
problem solver, we first consider what a "problem" is, and what it is to
solve one.  Problem solving is a specific kind of goal seeking:  namely,
that which is problematic, but successful.  The implication of success comes
from the achievement verb 'solving'; this implication is sometimes weakened
or even cancelled by explicit or implicit embedding of the phrase 'problem
solving' within the more complex phrase 'problem solving activity', which
may connote activity for which there is merely some intention, expectation,
or hope of success.

The concept of success in goal seeking is relatively clear, and requires
little explication; success occurs when the goal seeker actually achieves
the goal sought, or, in the case of problematic goal seeking, when the
problem solver finally solves its problem.  The general concept of a
problem, however, is not initially quite so transparent, and requires some
clarification.  In 1959, Newell, Shaw, and Simon characterized problematic
situations as follows:  "A problem exists whenever a problem solver desires
some outcome or state of affairs that he does not immediately know how to
attain.  Imperfect knowledge about how to proceed is at the core of the
genuinely problematic."  In 1972, Newell and Simon expanded thus on the
earlier idea:  "A person is confronted with a problem when he wants
something and does not know immediately what series of actions he can
perform to get it.  The desired object may be very tangible (an apple to
eat) or abstract (an elegant proof for a theorem).  It may be specific (that
particular apple over there) or quite general (something to appease hunger).
...  The actions involved in obtaining desired objects include physical
actions (walking, reaching, writing), perceptual activities (looking,
listening), and purely mental activities (judging the similarity of two
symbols, remembering a scene, and so on)."

The penetrating observations just quoted identify one of two characteristic
features of problematic situations -- the lack of an immediately available
procedure for the attainment of a goal.  The other main characteristic of
problematic situations, which Newell and Simon do not explicitly identify,
is the lack of a goal that is sufficiently definite to permit the choice or
construction of an appropriate direct procedure for achieving it.*  Consider,
for example, the case of a small boy with a sweet tooth and a pocket full of
change, who is hesitating before a well-stocked shelf in a candy store.  The
boy, we suppose, wants a candy bar, but doesn't yet know just what kind of
candy bar he wants.  Clearly, the boy is faced with a problem, but the
problem is not that of finding or constructing a procedure.  Rather, his
problem is one of "goal refinement".  Once he decides on a particular candy
bar, his choice of a procedure to obtain it will be entirely unproblematic
and direct.

--------------------
  *My attention was first drawn to the importance of this second
characteristic in a conversation with John McCarthy, who has for several
years expounded the slogan "Design precedes construction planning" in his
AI courses.  I believe that McCarthy's distinction between "design" and
"construction planning" is of quite general significance for problem
solving, and so have renamed his two concepts "goal refinement" and
"achievement planning", respectively.
--------------------

There are, then, not just one, but two, characteristic features of
problematic situations:  i) lack of sufficient definiteness in a goal, and
ii) lack of an immediately available procedure for the attainment of a goal.
In a given problematic situation, one or the other of these characteristics
may be attenuated or even absent, but at least one of them must always be
present.  A situation without either characteristic is unproblematic.  Not
surprisingly, these two ways in which situations may be problematic
determine the two main kinds of problem solving, which we shall refer to as
"goal refinement" and "achievement planning".  These are both forms of
"design": in the former case, design of a state of affairs to be achieved,
and in the latter case, design of a procedure for achieving a desired state
of affairs.  (Problems that ordinarily are described as being concerned
with the design of objects or object types are best regarded from a
theoretical standpoint as subproblems in the design of states of affairs.)
Some simple problems require only goal refinement for their solution, and
others require only achievement planning, but in general both types of
problem solving are necessary.

As an example, let us consider the case of a man who wants a new chair for his
study.  This desire may pose a problem for him, either fairly simple or
quite complicated, depending on the surrounding circumstances.  If the man
is rich, and isn't too fussy about the kind of chair he ends up with, he may
be able to avoid all serious problem solving simply by telling his butler to
get him an appropriate chair.  But if the man is firmly resolved to obtain a
chair of a highly specific and unusual style and quality, and if he cannot
obtain the services of a suitably skilled procurer of chairs, then he must
himself engage in non-trivial problem solving, which in this case may
consist exclusively of achievement planning.  On the other hand, if the
man's circumstances dictate that he select a chair of a given (approximate)
price from an available set of mail-order catalogs, then his problem, like
that of the boy in the candy store, is primarily one of goal refinement.
Much more difficult and interesting, however, are cases in which the man has
a highly refined taste in furniture, considerable ability as a furniture
procurer AND as a furniture maker, substantial (but still limited)
resources of time and money, but only a rather vague idea of the type of
chair he wants.  In such a case, the man may have to engage in COMBINED
goal refinement and achievement planning.  Moreover, he will not be able
reasonably to treat these two aspects of his problem as independent
subproblems, since the specific type of chair on which he finally decides
should depend, not only on his tastes and desires, but also on the means
available for satisfying them, together with the associated costs.

Moving from academic examples to implemented ones, a very interesting
(though relatively uncomplicated) interaction between goal refinement and
achievement planning occurs during a particular design episode in the "life"
of Mark Stefik's hierarchical MOLGEN planner, which Stefik reports in his
Ph. D. thesis [Stefik 1980].  The problem dealt with was the design of a
gene-cloning experiment in molecular genetics.  More specifically, the
problem was to plan a genetics laboratory procedure that would produce a
culture of bacteria carrying the gene for rat insulin on a vector.  (A
"vector" is a self-replicating DNA molecule that can be used to transmit
genes into bacteria).  While the goal state was thus specified only
abstractly (any combination of bacteria type and vector related in the
indicated way was acceptable), any particular goal-producing procedure
would, of course, yield a particular combination of bacteria type and
vector.  

The planning process was driven mainly by elimination of combinatorial
alternatives through the propagation of constraints on the objects in the
planning situation.  As planning progressed, it became necessary to
introduce three abstractly specified objects into the plan situation that
were not mentioned in the original goal specification: an antibiotic, an
enzyme, and a linker.

Thus far, we have seen that the characterization of problematic goal-seeking
provided by Newell, Shaw, and Simon is incomplete, and we have in effect
offered the following corrected version of Newell and Simon's 1972
formulation:
    An organism is confronted with a "problem" when it has a goal and
    either i) the goal is not sufficiently definite, or ii) the organism
    does not immediately know what program of actions to perform in order
    to achieve the goal. [this may need revision]

[At this point, we need to discuss more analytically the factors that
determine the interactions (if any) between goal refinement and achievement
planning in a particular problem, and that may cause one or the other to
become the primary focus of problem-solving activity.  The rough notes below
may or may not be useful for this purpose.]

------------------------------
------------------------------
When a specific goal proposition is given to a general goal-seeking process,
the goal proposition will determine a class G of possible worlds (those in
which it is true) such that if any member of G were to become the actual
world, the goal would be attained.  In general, the goal proposition will
provide only an abstract specification of what is wanted, whereas any
particular member of G will in principle be describable with any given
degree of specificity or detail.  For the purposes of goal seeking, we need
to introduce a descriptive language L for the members of G that is just
detailed enough to make all the distinctions that might be relevant for the
attainment of the goal.  That is, the language L should generate a set of
state descriptions (in the sense of Carnap's Meaning and Necessity) that
partitions the members of G into classes that are just small enough to
insure that the members of each are indistinguishable from one another in
any way that would have relevance to the attainment of the goal.  Thus, in
particular, within each partition cell the members should all have the same
value or utility with respect to the given goal.
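
To fix ideas, here is a minimal sketch (in Python, purely for concreteness;
the representation of worlds as value-assignments to named aspects is our
own illustrative assumption, not part of the design):

    def state_description(world, relevant_aspects):
        """Project a world onto the goal-relevant sublanguage L."""
        return tuple(sorted((a, world[a]) for a in relevant_aspects))

    def partition(G, relevant_aspects):
        """Partition the goal-satisfying worlds G into cells whose members
        are indistinguishable in any way relevant to the goal (and so, in
        particular, have the same utility with respect to it)."""
        cells = {}
        for w in G:
            cells.setdefault(state_description(w, relevant_aspects), []).append(w)
        return cells

    # The boy in the candy store: suppose only brand and price matter.
    G = [{"brand": "A", "price": 25, "shelf": 1},
         {"brand": "A", "price": 25, "shelf": 2},   # same cell as the above
         {"brand": "B", "price": 30, "shelf": 1}]
    print(partition(G, {"brand", "price"}))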

Generally, the goal proposition will not be specific enough to permit the
application of ...

In the most general case, a goal seeker tries to achieve a certain goal
while satisfying certain constraints, starting from a presupposed set of
circumstances and standing conditions.  If the goal seeker has available to
it an executable procedure that it knows will achieve the goal under the
presupposed circumstances, and knows that it would find the particular
world-state that would thus be brought about more satisfying than it would
any other world-state that it could bring about, then the goal-seeker has no
problem, and may simply execute the procedure in question.

  IV.  A Really General Problem Solver, based on a general scheme for the
	structuring and use of specialized knowledge.

                              This Page Last Revised:   18 Mar  1980

  IV.A   An Approach to Generality

What we believe to be the best overall approach to the design of an expert,
generally intelligent problem-solving system was explicitly suggested in
[Feigenbaum, Buchanan, & Lederberg 1971].  While this approach to
generality was developed within the limited task domain of acyclic organic
molecules (which was called "DENDRAL's world"), the ideas are clearly of
much broader significance.  The main idea is that, since great proficiency
(i.e., expertise) in problem solving almost always requires specialized
methods and representations, the way to achieve expert performance without
sacrificing generality is to embed an appropriate group of specialist
problem solvers within a control framework that also offers access to very
general problem solving methods and representations.  When feasible,
appropriate specialists can be assigned to help with the solution of a given
problem; but when there is no appropriate specialist, recourse to general
problem solving methods is always available.  The idea is not merely that
one general, and n specialized, problem solvers are to be independently
juxtaposed behind a "big switch"; rather, the specialists are to be fully
integrated into a basic framework provided by the methods and
representations of the general problem solver.

We propose to adopt a generalization of this basic design for our problem
solving system.  Further details on the specific form we propose to give it
can be found in Sections IV.B and IV.C.

Though they were not as explicit about it as the authors cited above,
McCarthy & Hayes [1969] seem to have had in mind a somewhat similar design
in their discussion (pp. 467-468) of a proposed reasoning program (RP) of
the Advice-Taker type.  Their discussion, which is mainly concerned with
representations and not with methods, proposes that there be a general
declarative representation in the form of a suitable logical language, and
that it have a certain kind of primacy over other, more specialized,
representations that may also be available.  The main aspects of this
representational primacy were formulated as follows:

1.  All other data structures have linguistic descriptions that give the
    relations between the structures and what they tell about the world.

2.  The subroutines have linguistic descriptions that tell what they do,
    either internally manipulating data, or externally manipulating the world.

3.  The rules that express RP's beliefs about how the world behaves and that
    give the consequences of strategies are expressed linguistically.

4.  RP's goals, as given by the experimenter, its devised subgoals, [and] its
    opinion on its state of progress are all linguistically expressed.

In some ways, these requirements are quite strong idealizations of human
intelligence; it seems that McCarthy and Hayes's reasoning program would
know itself much better than any human knows himself.  We think that the
requirements, at least in the strong form presented, are best regarded as
a philosophical "ideal of reason" that would hold of an intelligence with
a perfect scientific understanding of its own mind and knowledge.  We do
feel the attraction of the ideal, however, and accordingly adopt for
practical purposes a somewhat weakened version of the requirements, as
follows:
   [list revised points 1'-4']
   [distinguish genetically programmed representations from others over
    which an organism might have more control.]

  IV.B   A General Scheme for the Classification and Structuring of 
	  Specialized Heuristic Knowledge

                               This page last revised:   23 Mar 1980

Our overall approach to problem solving is a generalization of the idea,
discussed in the previous section, that expertise and generality are best
combined by integrating an appropriate group of specialist problem solvers
into a basic framework provided by the methods and representations of a
general problem solver.  The generalization involved is this:  The basic
framework is to be provided, not by a single general problem solving method
such as generate-and-test, heuristic search, or even GPS, but rather by a
partially compatible variety of fairly general submethods, from which a very
wide range of problem-solving methods can be synthesized.  Each such
method will be augmentable in various ways by appropriate forms of specific
heuristic knowledge, which may greatly increase its power and effectiveness.
A particular method, thus augmented and appropriately coupled to a specific
class of problem representations, will constitute a specialist problem
solver.  

On the representational side of the basic framework is a primary language
consisting of a beefed-up first-order predicate calculus with identity,
having built-in facilities for treating as objects such things as operators,
actions, situations, concepts, propositions, sets, etc.  This primary
language is to serve both as a medium for "external" problem
representations, and as a fallback medium for computational purposes, to be
used when no more efficient specialized representation of a given problem is
known. This primary language is to be supplemented by a wide, open-ended
variety of specialized representational forms for computational use.

Thus, while our approach to generality is somewhat closer than the approach
suggested by Feigenbaum, Buchanan, & Lederberg to the "big-switch" model
mentioned by Ernst & Newell [1969, pp. 269ff.], it is still very much a
"unified" framework in the sense of the latter authors.

We view the framework proposed here in the light of Newell's taxonomy of
"weak methods" [Newell 1969, 1973].  We think that Newell's way of thinking
about problem solving methods is a very illuminating one, and that his ideas
along these lines can be developed significantly within our framework.
[describe his theory a little and say in what directions we plan to develop it.
We wish to deal with the transition from weak to strong methods through the
addition of heuristic knowledge.]

An integral part of this general framework will be a well-structured set of
facilities for accessing and utilizing an organized body of specialized
heuristic knowledge that is relevant to a given type of problem.  For the
special case of planning problems, the types of heuristic knowledge that
will be specifically provided for include at least the following:

  a. sets of relevant operations for problems of a given type.
  b. abstraction schemes for use with particular sets of operations.
  c. a set of abstract plan schemata that utilize the abstraction scheme of b.
  d. sets of plan refinement heuristics (production rules) for use in
     hierarchical planning within the abstraction scheme of b.
  e. a useful problem frame (or frame schema) for problems of a given type.
     (As we use the term, a "problem frame" is a set of problem-relevant
     aspects or parameters that characterize a given situation for purposes
     of simulative prediction.)
  f. a set of parameterized specific plans utilizing only the operations of a.
  g. detailed records (i.e., specific memories) of past successful
     problem-solving episodes, including descriptions of the particular
     problem representations, algorithms, heuristics, etc. that were used.

Some of these knowledge types presuppose some of the others; in particular,
the plan schemata of c. and the refinement heuristics of d. both presuppose
the abstraction scheme of b.
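
(By way of illustration only -- no such data structure is yet specified in
this report -- the following sketch, in Python for concreteness, shows one
way the knowledge types a. - g. might be collected into a single record
attached to a problem type; all field names are hypothetical.)

    from dataclasses import dataclass, field

    @dataclass
    class PlanningKnowledge:
        """Specialized heuristic knowledge for one problem type (a. - g.)."""
        operations: list = field(default_factory=list)          # a.
        abstraction_scheme: dict = field(default_factory=dict)  # b.
        plan_schemata: list = field(default_factory=list)       # c.
        refinement_rules: list = field(default_factory=list)    # d.
        problem_frame: set = field(default_factory=set)         # e.
        parameterized_plans: list = field(default_factory=list) # f.
        episode_memory: list = field(default_factory=list)      # g.

A specialist problem solver would then be, roughly, a general method coupled
to one such (largely filled-in) record; an empty record leaves the framework
in its general mode of operation.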

Our problem-solving framework will be so designed that all this knowledge
can be directly and easily fitted into it, thus transforming it into an
expert specialized problem solver.  However, the framework will still be
able to function in a more general mode without such knowledge when it is
not available.  In this latter case, the general program might either decide
to solve the problem at a given level of available heuristic knowledge, or
might try to obtain better knowledge with which to work.  In general, this
might be a non-trivial decision problem.  

This general framework for problem solving is to be implemented in the
programming formalism discussed in Section VIII.

  IV.C   The Acquisition of Specialized Heuristic Knowledge for Planning

                               This page last revised:   22 Mar 1980


Here are some approaches to the acquisition of new planning heuristics
that we expect to explore within the proposed framework:

  1.  Computation of heuristic planning knowledge from more general
      information stored in permanent memory.  (definitions, dictionaries,
      Peter Friedland's stuff, etc., also computation of criticality levels
      from ranges of options, etc.)

  2.  Acquisition of heuristic planning knowledge through advice taking.

  3.  Acquisition of heuristic planning knowledge for one specific case by
      analogous modification of such knowledge recalled from another
      specific case.  Cf. [McDermott 1979] and [Moore & Newell 1973].

  4.  Learning of heuristic planning knowledge through caching of
      appropriately selected and generalized results of previous planning
      activity.  [Give an account of the generalization process, and show its
      relation to the direct construction of a problem setup by analogy.]
      [discuss general caching policy; mention JMC's sketch of such a policy
      in the '59 paper.]

  IV.D   The Use of Specialized Knowledge in a Truly General Problem Solver

                               This page last revised:   22 Mar 1980

[a. discuss the choice and construction of a computational problem setup,
    which utilizes the knowledge framework discussed in Section IV.B.

 b. show how this interacts with the determination of knowledge needs in
    preparation for advice seeking.

 c. point out that a computational problem setup can be obtained directly
    from a remembered exemplar, by analogy.  This must be related to the
    exposition of learning of heuristic knowledge in point 3 of the
    preceding sub-section.]


Within the framework just described, problem solving can be either trivial
and habitual, non-trivial and demanding, or even truly creative, depending
on how much previously stored knowledge of the sorts a. - g. (Section IV.B)
can be brought to bear on a given problem, and how resourceful the problem
solver is in constructing new representations and/or methods for that
problem.  When presented with a request for a procedure suitable for
attaining a given goal, a problem solver will first try to retrieve from
memory either a procedure pre-compiled specifically for that goal, or a
parameterized procedure that needs only to be matched against the goal in
some standard way to yield a complete solution.  Examples of this sort of
thing would be retrieval of a fixed procedure for traveling to one's regular
place of work, or of parameterized procedures for phone dialling or scissor
manipulation.  Such procedure-finding, being a limiting case,
hardly seems to qualify as problem solving.  Not QUITE so easy is the task
of planning a routine cross-country plane trip; what is involved here is the
retrieval of an abstract plan schema (c.) that matches the goal, and the
refinement of this schema with the aid of a well-developed set of plan
refinement heuristics (d.).  Somewhat more demanding is the planning of a
non-trivial car repair by a mechanically adept person who is inexperienced
in the particular kind of repair job involved.  Such a person will lack an
appropriate plan schema (c.), and may even have an underdeveloped set of
heuristics (d.), and so may decide to consult a suitably experienced person
for advice before attempting the job.  However, such a person will not be
totally at a loss if no advice is available to him and he has to proceed
without it.
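
The escalation just described amounts to a retrieval cascade.  A minimal
sketch (in Python for concreteness; the memory interface, the goal
classifier, and the refinement step are all hypothetical placeholders):

    def find_procedure(goal, memory):
        """Escalate from direct recall toward genuine problem solving."""
        # Limiting case: a procedure pre-compiled for exactly this goal.
        if goal in memory["specific"]:
            return memory["specific"][goal]
        kind = goal_type(goal)
        # Parameterized procedure: match it against the goal in a standard way.
        if kind in memory["parameterized"]:
            return memory["parameterized"][kind](goal)
        # Abstract plan schema (c.) refined by plan refinement heuristics (d.).
        if kind in memory["schemata"]:
            return refine(memory["schemata"][kind],
                          memory["heuristics"].get(kind, []), goal)
        return None   # no specialized knowledge: seek advice, or plan generally

    def goal_type(goal):
        """Hypothetical classifier of goals into problem types."""
        return goal.get("type")

    def refine(schema, heuristics, goal):
        """Hypothetical refinement: pass each abstract step through the
        applicable production rules until the steps are concrete."""
        steps = list(schema)
        for rule in heuristics:
            steps = [rule(step, goal) for step in steps]
        return steps
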
------------------------------
[the following are undeveloped rough notes]

and may even lack a good abstraction scheme for the type of job in question 

[discuss the tradeoffs here;  a priori analysis vs. experience vs.
advice-taking as ways of acquiring this knowledge]

... into its own internal lingua franca -- 

we start with a problem to solve, and go from there ...
...  the initial choice between planning vs. finding a procedure vs. real-time
	execution mode(?)

  1.  There will be different degrees of creativity in problem solving,
      reflecting the degree to which the solution process can make use of
      previously stored heuristic knowledge.

Routine problem solving uses parameterized stored plans ...  
(relate to Stefik vs. Friedland)

  2.  Direct retrieval from permanent memory of stored heuristic 
      planning knowledge (of some of the types a. - f. mentioned above)

      [relate to JMC's search for stored plans in '59 paper]

   V.  Toward a General, Human-Like, Artificial Intelligence

                               This page last revised:   28 Mar 1980

The basic idea here is that a general intelligence is a structured,
controlled, community of specialized problem solvers, each with its own
special knowledge and heuristics, and each able to speak a logical "lingua
franca", at least to the extent required to effectively transact business
with the community at large.  Our approach to generality was described above
in Section IV.A, and the organizational structure of the component problem
solvers will be dealt with in Section VIII; here we shall be concerned with
the top-level control structure and motivational structure of a human-like
general intelligence.

Two pervasive characteristics of human goal-seeking activity are the
multiplicity and diversity of purposes served by actions typically
undertaken, and the central role of time in the structuring of tasks and the
allocation of available resources.  Though the mental abilities underlying
these characteristics are of great importance in intelligent decision
making, little effort has thus far been devoted by AI researchers to the
task of endowing automatic problem solving systems with such abilities.  Our
proposed problem-solving system is intended to provide (among other things)
a suitable framework within which to address these issues.

   V.A   The Motivational Dependency Structure

When a person seeks a goal, he usually has some reason for doing so.
Frequently, the goal subserves some higher-level purpose which the person
has.  This is analogous to an AI problem-solving program attempting a
sub-goal that it has created in the pursuit of some higher-level goal.
Similarly, when a person applies a constraint in the course of goal-seeking,
there is usually some higher-level reason or consideration underlying his
application of the constraint.  We propose to make these motivational
dependency relationships explicit in a Motivational Dependency Structure
(MDS), a directed graph with nodes representing goals, constraints, etc.,
and arcs representing dependency links among the node items.  Each goal and
constraint is to be connected to those higher-level factors which provide
some motivation or reason for it.  These higher-level factors are themselves
to be linked to still more fundamental sources of motivation on which they
depend, and so on, until the primitive motivational wellsprings are reached
(in humans, these wellsprings include primary drives and aversions,
fundamental needs and values, and genetically determined behavior patterns).
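
A minimal sketch of such a structure (in Python for concreteness; the node
kinds and the example items are hypothetical):

    class MDS:
        """Motivational Dependency Structure: a directed graph linking each
        goal or constraint to the higher-level factors motivating it."""
        def __init__(self):
            self.kind = {}       # node -> 'goal', 'constraint', 'drive', ...
            self.supports = {}   # node -> higher-level nodes that motivate it

        def add(self, node, kind, supported_by=()):
            self.kind[node] = kind
            self.supports[node] = set(supported_by)

        def wellsprings(self, node):
            """Trace a node back to its primitive motivational sources."""
            parents = self.supports.get(node, set())
            if not parents:
                return {node}
            return set().union(*(self.wellsprings(p) for p in parents))

    mds = MDS()
    mds.add("appease hunger", "drive")
    mds.add("obtain a candy bar", "goal", supported_by=["appease hunger"])
    mds.add("spend at most 50 cents", "constraint",
            supported_by=["obtain a candy bar"])
    print(mds.wellsprings("spend at most 50 cents"))   # {'appease hunger'}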

   V.B   The Natural History of Goals

[explain the different ways in which tasks and goals may originate;
subgoal generation vs. mere noticing of desirable states;
augmentation of MDS by noting new motivational support relationships;
distinguish pre-scheduling status from post-scheduling status.]

   V.C   A Time-Driven Agenda System

  VI.  The Recognition and Analysis of Human Beliefs and Intentions

                               This page last revised:   21 Mar 1980

The recognition and analysis of human beliefs and intentions is a
capability that is fundamental to military intelligence analysis.
We believe that such a capability is best based on a Fregean
representation of propositional attitudes, coupled with a simulative
approach to reasoning about such attitudes, an approach we have sketched
in [Creary 1979].  (On the representational side, see also [McCarthy 1979].)

In simulative reasoning about the beliefs, desires, intentions, etc., of an
intelligent organism, it is not enough to model just the propositional
attitudes of the organism; it is important also to model the thinking and
problem-solving processes of the organism.  In [Creary 1979] we showed how
the simulator's own inferential processes could be used to investigate the
beliefs of another organism.  In [Genesereth 1978, 1979] it is shown how a
procedural model of a person's problem-solving processes can be used by a
program to investigate the person's beliefs, plans, and intentions.
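
In caricature, the simulative idea is this: to decide whether another
organism believes q, run one's own inferential machinery over the premises
one attributes to that organism, rather than over one's own beliefs.  The
toy forward-chainer below (in Python for concreteness) is a deliberately
trivial stand-in for the reasoner's own inference mechanism, not the
mechanism of [Creary 1979]:

    def forward_closure(premises, rules):
        """Close a premise set under rules of the form (antecedents, consequent)."""
        known = set(premises)
        changed = True
        while changed:
            changed = False
            for antecedents, consequent in rules:
                if consequent not in known and all(a in known for a in antecedents):
                    known.add(consequent)
                    changed = True
        return known

    def believes(attributed_premises, own_rules, query):
        """Simulative reasoning: apply one's OWN inferential processes to
        the premises attributed to the OTHER organism."""
        return query in forward_closure(attributed_premises, own_rules)

    own_rules = [(("rain",), "wet streets")]
    print(believes({"rain"}, own_rules, "wet streets"))   # True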

More generally, we think that it is important for the investigation of an
organism's beliefs and intentions to be able to model a large part of the
overall mental apparatus of the organism in question, including its overall
motivational system and much of its total knowledge structure.  This is a
very tall order.  Nevertheless, we have described above in Sections IV and V
what we believe to be a realistic and workable approach to the development
of such a sophisticated modelling capability.

 VII.  Advice Taking and Advice Seeking: Theory, and Applications to
        Knowledge Acquisition and to Program Development and Maintenance

                               This page last revised:   19 Mar 1980

  VII.A   Advice Taking and Advice Seeking: Theory

  VII.A.1   The Need for Advice Seeking

According to McCarthy's 1959 conception of it, advice taking is a form of
goal-directed information transfer.  The goal is to improve the performance
of the advice taker by giving it information (advice), in a manner that
requires of the advice giver "little if any knowledge of the program or the
previous knowledge of the advice taker".  One way of attempting to do this
would be merely to have the advice giver "instruct" the advice taker by
giving it a specific set of improvement goals and some relevant factual
information; this appears to be what is proposed in JMC's 1959 design sketch
for the advice taker [McCarthy 1959].  However, this may not be the optimal
way of achieving the initial goal; there will probably be more efficient
ways of conveniently transferring the information required to improve
performance (assuming that the advice giver has such information).  There
are two main reasons for this.

In the first place, if the advice giver really does know little about the
previous knowledge and program of the advice taker, it is unlikely that he
will be very good at guessing what the relevant knowledge is -- what
knowledge the advice taker needs in order to improve its performance in the
desired way.  This suggests that an interactive exchange is needed between
the advice giver and taker, in which the giver first sets the improvement
goals, and then the taker determines what new knowledge, if any, it needs to
be able to make the specified improvements with a reasonable amount of
effort.  Next, the advice taker may ask the advice giver for advice of a
specific sort, as a means of obtaining the requisite new knowledge.  If
appropriate advice is then given, the advice taker takes it and tries to
make the desired improvements.  We propose that this be the standard style
of advice taking for the new problem solver.  The basis for the advice
taker's determination of the knowledge it needs to make a given improvement
has two major parts:  the organizational scheme for heuristic knowledge
discussed above in Section IV.B, and a body of specialized meta-knowledge
indicating the types of knowledge (of whatever kind) that are needed in
order to perform different types of tasks at various levels of proficiency.
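
The turn structure of this interactive exchange can be sketched as follows
(in Python for concreteness; the meta-knowledge table mapping improvement
goals to required knowledge, and all names here, are hypothetical):

    class AdviceTaker:
        def __init__(self, knowledge, meta_needs):
            self.knowledge = set(knowledge)
            self.meta_needs = meta_needs  # improvement goal -> knowledge required

        def knowledge_needs(self, goals):
            """What must be learned before the improvements can be made
            with a reasonable amount of effort."""
            return [k for g in goals for k in self.meta_needs.get(g, [])
                    if k not in self.knowledge]

        def assimilate(self, advice):
            self.knowledge.add(advice)

    def advice_dialogue(taker, giver, improvement_goals):
        """The giver sets the goals; the taker determines its needs and asks
        for advice of a specific sort; the giver supplies what it can."""
        for need in taker.knowledge_needs(improvement_goals):
            advice = giver(need)          # giver: need -> advice, or None
            if advice is not None:
                taker.assimilate(advice)
        return taker.knowledge

    taker = AdviceTaker({"read a timetable"},
                        {"plan train trips": ["read a timetable",
                                              "transfer heuristic"]})
    print(advice_dialogue(taker, lambda need: "advice about " + need,
                          ["plan train trips"]))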

  VII.A.2   The Need for Heuristic Advice

The second reason why simple factual "instruction" is not always an optimal
mode of program advising is that purely factual advice will not always be
the sort of advice that is most appropriate or needed in a given set of
circumstances.  For example, in some cases what the advice taker will most
need to know in order to make the desired improvement is a certain piece
(or pieces) of heuristic knowledge, which might be most naturally expressed
as (say) a conditional imperative or in some other partly procedural form.
While it may often be appropriate for the advice giver to offer some
explanatory factual information along with the needed procedural heuristics,
in other cases, just the heuristics alone will be most appropriate.  An
ideal advice-taker would be able to take advice in the form in which it is
most conveniently offered by a human or other expert, and transform it into
the most efficient form for its own use.  In practice, we hope to make it a
little easier on the program by using the request for advice to let the
advice giver know in what form the needed information would be most easily
assimilated by the program.  Nevertheless, we do expect to deal with the
problem of knowledge transformation.

  VII.A.3   The Dependency of Successful Advice Taking on the Ability to
	    Make Effective Use of Knowledge

Common sense involves the ability of an organism to integrate both facts,
and relatively abstract norms and procedural rules of thumb, into its
detailed procedures for dealing with the world.  It is on this fundamental
ability to make effective procedural use of general knowledge for special
purposes that the capability to take advice chiefly depends.  (By "general
knowledge," we here mean that which is applicable to more than a single
case; of course, this admits of degrees.)  This fundamental ability may take
one of two different forms:  i) the ability of an organism's special-purpose
procedures to "interpret" general-purpose knowledge directly, without
transforming it, and ii) the ability of an organism to compile general-
purpose knowledge into special-purpose procedures.

Advice seeking and taking is just one of the many techniques by which a
generally intelligent organism may gain its knowledge.  By whatever means
and in whatever form knowledge is acquired, it must first be integrated
into an appropriate memory structure, where it will then be available for
use by whatever procedures may require it, including those that may
compile it into a more specialized form.  Thus, once a program is capable
of taking advice of a certain kind in a non-trivial way, it should also be
able to make effective use of that same kind of knowledge when it is
gained in other ways.  By the same token, if a program cannot make
appropriate procedural use of a certain kind of knowledge, it will also be
incapable of taking knowledge of that kind as advice.

Modularity of design of component problem solvers is to be one key to the
advisability of our proposed program.  This modularity will be greatly
facilitated by the structure of the high-level formalism in which the
problem solvers are to be written (see Section VIII).  As we indicated
above, two other keys to advisability are to be a well-structured scheme
for representation and use of specialized domain knowledge, and a related
body of specialized meta-knowledge.

  VII.B   Advice Taking and Advice Seeking: Applications to Knowledge
          Acquisition, and to Program Development and Maintenance

A powerful advice-taking capability would be of great potential value in at
least two very important areas of practical application:

       i) knowledge acquisition for intelligent computer systems

It is well-known that the problem of knowledge acquisition is a major
bottleneck in the construction of knowledge-intensive problem-solving
programs.  [Contrast our proposed style of advice taking with other current
approaches to knowledge acquisition]

      ii) semi-automatic incremental development and maintenance of
	  intelligent computer systems

Our study of advice taking should in time lead to a practical means of
restructuring the problem solver for experimental purposes.  Such a
development will be expedited by a conscious effort to examine
retrospectively each episode in which the system is reprogrammed by hand, to
determine how the system could have been made to restructure itself in the
desired way on the basis of appropriate advice.  In this way a researcher
will be able to utilize a fresh set of intuitions about his own information
processing to advance his understanding of the advice-taking problem.  The
problem solver itself thus will serve as a real-world task environment to
stimulate and test developments in the theory of advice taking.  (We expect
that programming in the agenda-based rule formalism of Section VIII, whether
automated or human, will have quite a different flavor from programming in,
say, LISP.)

Given the design ideas for an advice taker being developed in this document,
the practical considerations of this subsection alone would appear to
justify a strong research effort in this area, even independently of the
great theoretical appeal of such an effort.

VIII.  The Organizational Structure of Component Problem Solvers

                               This page last revised:   28 Mar 1980

We propose to organize our component problem solvers along lines suggested
by Mark Stefik in his thesis [Stefik 1980]; he has creatively synthesized
and generalized a number of good ideas in the literature.  The discussion
below reflects some significant adaptations and extensions of Stefik's ideas
that we are making for use within the advice-taker framework.  The basic
conception is that a problem-solving program is an unordered set of
interruptible and restartable problem-solving operators (n-ary procedures,
n≥0) that are executed appropriately by a control program or interpreter.
In addition to a body of code and a pointer to an interpreter for it, there
is associated with each operator a structured body of descriptive
information concerning the operator, together, perhaps, with some
specialized procedures that utilize this information.  The information may
include such things as conditions of applicability, special application
heuristics, relative priority of execution, effects of execution, special
advantages or weak points, and any other information that may be helpful to
the interpreter in executing the operator.  Thus conceived, our (operator +
information) modules are quite similar to the "pattern-directed modules" of
[Hayes-Roth, Waterman, & Lenat 1977], with the main difference being the
emphasis that we place on factoring as much specialized declarative
information as possible out of all procedures associated with the modules.
This emphasis makes it possible to extract some of the general procedural
content of the PDMs, and place it in the control structure, where it seems
more properly to belong.
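
One way to render such an (operator + information) module, with the
declarative description factored out of the procedural code, is sketched
below (in Python for concreteness; all field names are illustrative only):

    from dataclasses import dataclass, field
    from typing import Callable

    @dataclass
    class OperatorModule:
        """An interruptible, restartable problem-solving operator, together
        with the declarative information used in executing it."""
        name: str
        code: Callable            # the n-ary procedure itself
        applicability: Callable   # condition of applicability: situation -> bool
        priority: int = 0         # relative priority of execution
        effects: tuple = ()       # declared effects of execution
        heuristics: dict = field(default_factory=dict)  # special application hints

    def move_block(situation, block, dest):
        situation["on"][block] = dest
        return situation

    MOVE = OperatorModule(name="move-block",
                          code=move_block,
                          applicability=lambda s: bool(s["clear"]),
                          priority=5,
                          effects=("changes the 'on' relation of one block",))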

The framework is to be one of controlled, rather than heterarchical,
cooperation.  [expand this idea to a full paragraph]   

The control structure is a generalization of typical production rule
schemes, and involves two main aspects: agenda construction, and agenda
execution.  In agenda construction some of the available operators are
singled out as relevant at the current stage of problem solving, partly on
the basis of comparisons of active goals with operator descriptions.  It is
then determined which of these candidate operators is applicable in the
current problem situation, and where.  The processes of initial selection
and applicability testing will make use of observations of the problem
situation, and the special information and procedures associated with each
operator (or appropriate abstractions thereof). These constructive processes
will yield an agenda of tasks (i.e., operators with specific argument
bindings derived from the problem environment).  In agenda execution, the
agenda (which will in general contain both virgin and interrupted tasks) is
examined and some of the tasks on it are executed, restarted, or pruned in
an order deemed appropriate to the current problem situation.  The selection
and winnowing processes are informed and aided by observations of the
problem situation and by the special information and procedures stored with
each operator.  Agenda construction and agenda execution are interwoven in a
way appropriate to the given problem type until a solution is obtained, or
until a specified stopping condition is satisfied.  Except for the important
difference in the character of the pattern-directed modules noted at the end
of the preceding paragraph, both our overall system and its component
problem solvers satisfy the general definition of a "pattern-directed
inference system" offered in [Hayes-Roth, Waterman, & Lenat 1977] (p. 579),
and are very much in the spirit of PDISs as discussed by those authors.
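
Read operationally, the two interleaved phases look roughly like this (a
bare skeleton in Python; the task-binding, selection, and stopping policies
are left abstract, and the operator fields are those of the module sketch
above):

    def run(operators, situation, goals, solved, bind, select):
        """Interleave agenda construction and agenda execution until a
        solution is obtained or nothing remains to try."""
        agenda = []   # tasks: (operator, argument bindings), new or interrupted
        while not solved(situation, goals):
            # Agenda construction: single out relevant, applicable operators
            # and bind their arguments from the problem environment.
            for op in operators:
                if op.applicability(situation):
                    for args in bind(op, situation, goals):
                        if (op, args) not in agenda:
                            agenda.append((op, args))
            if not agenda:
                return None        # stopping condition: no applicable task
            # Agenda execution: run (or restart) the task deemed most
            # appropriate to the current problem situation.
            op, args = select(agenda, situation)
            agenda.remove((op, args))
            situation = op.code(situation, *args)
        return situation

A component problem solver's control program is, on this conception, itself
such a loop one level up, as the discussion of the hierarchy of control
below indicates.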

[compare and contrast this design with that of Lenat's "beings" and Hewitt's
"actors".]

The control program of a given component problem solver is specialized for
the particular type of problem to be solved, and in general is itself a
problem solver organized along the lines just described.  Thus there is a
potential hierarchy of control, with all interpreters except the topmost one
being decomposed into a set of unordered operators and an interpreter for
them.  The topmost interpreter is an atomic operator or procedure that is
sufficiently simple and general as to never require modification by advice
or learning.  Stefik's experiment planner has two levels of control above
the basic planning operators, and we anticipate using no more than three or
four levels in our most complex component problem solvers.

It is obvious that the programming framework just outlined permits a great
deal of modularity in program design.  What is most significant about the
framework for the purposes of advice-taking, however, is the KIND of
modularity it permits.  Let us compare two versions of a problem-solving
interpreter:  one coded entirely in LISP, and the other decomposed in
accordance with the preceding paragraph, with the new operators and new
top-level interpreter remaining in LISP.  Note that the decomposition does
not merely factor a fairly large program into a number of smaller ones; a
most important product of the decomposition in the present context is the
DECLARATIVE information that describes the resulting operators, and which is
factored OUT of the procedural code entirely.  It is the declarative component
of the decomposition products that we wish to maximize, since the greater the
proportion of declarative control information in the interpreter's program,
the easier it should be to modify the program through the medium of advice.
Thus, we consider one of the major virtues of the programming style proposed
above to be its potential for maximizing the declarative content of
procedural specifications within a natural and potentially efficient
framework.

As a long-term goal, we propose to use the programming formalism outlined
above to code as many as possible of the various procedures associated with
our component problem solvers, including many of the operators that form
their components.  This will maximize the modularity of the total system by
factoring the maximum amount of declarative information out of all the
component procedures, leaving as atomic procedures to be coded directly in
LISP only some very low-level procedures and the topmost interpreters.  Of
course, it is quite unlikely that all the procedures thus modularized will
require very complex interpreters of the sort described above for our
problem solvers.  We expect that many of the lower-level procedures in
question can be modularized using rather simple table- and parameter-driven
interpreters.  We view our planned effort to maximize declarative modularity
as an empirical determination of the practical limits of such modularity.

  IX.  The Frame and Qualification Problems

                               This page last revised:   30 Mar 1980

In planning a course of action, it is often appropriate to attempt to
predict the likely results of various alternatives that are available.
When the subtask thus arises of inferring the problem-relevant aspects
of Result(A, S), the situation that would result from the performance of
action A in a given situation S, two different kinds of inference must be
made:

  i) The inference of those problem-relevant aspects or parameters of the
situation S that would change, together with their new values (we shall call
these the CAUSATIVE inferences), and

  ii) The inference of those problem-relevant aspects or parameters of the
situation S that would remain unchanged (we shall call these the CONSERVATIVE
inferences).  

For example, consider Jones, who plans to get in touch with a friend by
first looking up the friend's number in a phone book, and then dialling that
number into an appropriate telephone.  The result Jones expects to achieve
by looking in the phone book is (at least temporary) knowledge of his
friend's phone number; thus, he expects that the execution of his lookup
procedure will not cause his friend's phone number to change.  Further, he
expects that his act of knowledge acquisition will not affect the working
condition of the telephone system, or the disposition of his friend to
answer the telephone when it rings.  Indeed, his achievement of the intended
effect of the second step in his plan (i.e., his achievement of voice contact
with his friend by dialling the friend's phone number) will ordinarily
depend on the truth of these negative expectations.  Now suppose that Jones
was relatively unfamiliar with telephoning and related lookup procedures
when he formulated his plan.  Then his causal expectations, described above,
may well be based on inferences explicitly (though perhaps unconsciously)
made during his planning process.  If this is so, then his expectation of
knowing his friend's phone number is based on what we earlier called a
causative inference, while his expectation of talking by telephone
with his friend is based on that same causative inference, combined with
several conservative inferences and a further causative inference.
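
To fix ideas, here is a bare sketch of the two inference kinds applied to
the Jones example (in Python for concreteness; the rule format and the
blanket persistence default used here are far cruder than anything proposed
below):

    def result(action, situation, causal_rules):
        """Infer the problem-relevant aspects of Result(A, S)."""
        changed = {}
        for rule in causal_rules:   # rule: (situation, action) -> updates or None
            updates = rule(situation, action)
            if updates:
                changed.update(updates)            # CAUSATIVE inferences
        return {aspect: changed.get(aspect, value) # CONSERVATIVE inferences:
                for aspect, value in situation.items()}  # the rest persists

    def lookup_rule(s, a):
        """Phone book lookup normally yields knowledge of the number sought."""
        if a == "look up number":
            return {"Jones knows number": True}

    s0 = {"Jones knows number": False,
          "friend's number unchanged": True,
          "phone system works": True}
    print(result("look up number", s0, [lookup_rule]))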

If an epistemologically inclined analytic philosopher were for some reason
to consider planning situations of the sort just discussed, he would
recognize in them two problems to which he could immediately address himself
as an epistemologist:  namely, those of explicating the nature and grounds
of the two kinds of inference involved -- causative and conservative.  To
solve these problems, he would have to provide precise formulations of all
of the premise types and rules of inference involved, and would also have to
justify the rules thus formulated by showing that they were in accord with
sound epistemological theory.  Moreover, since the inferences in question
are in effect conditional "proofs" supporting subjunctive conditional
statements, and since the inferences are patently of a causal nature, a
sound justifying epistemological theory will have to fit together in the
right way with a sound metaphysical theory concerning the nature of
causation and the semantics of subjunctive conditionals.  Thus, the
epistemological tasks of explicating causative and conservative planning
inferences have some significant metaphysical overtones.

These same problems in the epistemology of causal/subjunctive inference
arise in a particularly concrete and pressing form within artificial
intelligence, since AI researchers wish to construct computer programs that
plan courses of action in a sound and intelligent way.  To do this, they
must build into their programs precise reasoning algorithms for causative
and conservative planning inferences, and they are naturally concerned that
these algorithms be sound and reasonable, as well as efficient.  Two
specific sub-problems that have arisen in this connection are the frame
problem and the qualification problem (both problems were first identified
in [McCarthy & Hayes 1969]).

The frame problem is concerned with the explication of what we have called
conservative inferences.  Historically, the problem has been that of
deciding to what extent conservative inferences should be regarded as
analogues of causative inferences, which evidently are based on positive
domain knowledge (e.g., phone book lookup normally results in knowledge of
the phone number sought), and to what extent they should be regarded instead
as some sort of default or presumptive reasoning that is based on a LACK of
any known (or perhaps, knowable) domain information to the contrary.  At
first, there were a number of attempts to take the former approach, casting
conservative inferences as deductions from axioms expressing such facts as
that phone book lookup normally does not change people's telephone numbers,
or their readiness to answer their telephones, or their state of possession
of a telephone, etc., etc.  However, this approach led to excessive numbers
of excessively complicated axioms that had to be either explicitly stored in
or themselves deduced from a general knowledge base.  More recently, there
has been a general trend toward the second approach.  McCarthy, for example,
now regards conservative inferences as plausible but fallible
(i.e., non-deductive) conjectures, to be made in accordance with suitable
circumscription schemata and substitution predicates.  A somewhat different
approach of the second general sort will soon be proposed by Creary; this
proposal will also deal with some aspects of the qualification problem,
which we shall now discuss.
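
First, though, the contrast between the two approaches to conservative
inference can be put in miniature as follows.  The fragment is a rough
sketch only; in particular, the default rule below is merely in the
general spirit of the second approach, and is not a rendering of
circumscription.

    # First approach: an explicit frame axiom for each (action, aspect)
    # pair, so that the number of axioms grows with the product of the
    # number of actions and the number of aspects.
    FRAME_AXIOMS = {
        ('LookupNumber', 'FriendsPhoneNumber'): 'unchanged',
        ('LookupNumber', 'PhoneSystemWorking'): 'unchanged',
        ('LookupNumber', 'FriendWillAnswer'):   'unchanged',
        # ... and so on, for every action paired with every aspect ...
    }

    # Second approach: a single fallible default, overridden only where
    # there is positive knowledge of change.
    def presumed_unchanged(aspect, action, known_effects):
        """Conjecture that ASPECT survives ACTION unless some known
        effect of ACTION mentions it."""
        return aspect not in known_effects.get(action, ())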

The qualification problem has a number of different manifestations, two of
which are relevant to the present discussion.  The first of these concerns
what we call causative inferences, and the fact that, while generally
reliable, they still can "go wrong" in an almost infinite number of ways
(e.g., an expectation of successful phone book lookup, based on causative
inference, may turn out false because the phone number sought is either
incorrectly listed or illegible, or because the page containing that number is
missing, or because the lights suddenly go out, or because the seeker's visual
system suddenly fails, or ...).  The basic problem here is that the action A
(or its developing causal effects) may encounter in the environment being
modeled (in a situation S' located temporally between S and Result(A, S) ) a
condition PC which prevents A from having its causatively inferred effect, and
which was not taken into account in the relevant causative inference.  This
preventing condition may already have obtained in the situation S (e.g., the
illegibility of the phone listing sought), or it may be the result of an
extraneous event occurring between Time(S) and Time(Result(A, S)) (e.g., the
failure of a necessary lighting system).
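
Schematically, and continuing the notation of the earlier program
fragment (with all names again merely illustrative), the structure of
such a failure is this:

    def expected_effects(action, s):
        """The CAUSATIVE inference, which ignores preventing conditions."""
        return CAUSAL_RULES[action]

    def actual_effects(action, s, s_prime, preventing_conditions):
        """What actually happens: any preventing condition PC holding in
        the intermediate situation S' (whether it already obtained in S
        or arose from an extraneous event) blocks the inferred effects."""
        for pc in preventing_conditions:
            if pc(s_prime):        # e.g. listing illegible, lights out, ...
                return {}          # the action fails to have its effect
        return expected_effects(action, s)

The point of the qualification problem is that the list
preventing_conditions is, for practical purposes, open-ended.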

The maker of such a failed causative inference may or may not be
epistemically culpable, depending on whether, at the time of the inference,
he had any reasonable way of anticipating the occurrence of the condition PC
and its influence.  For example, the maker of a causative inference might,
through inattention or carelessness, culpably overlook a relevant fact well
known to him, and this might cause his plan to fail.  On the other hand, an
unsuccessful causative reasoner might have had no way whatever of
anticipating the condition that led to the failure of his inference.  And,
of course, there will be borderline cases, in which failure could have been
avoided on the basis of available knowledge, but only by exercising a degree
of care and thoroughness in causative inference that may not have been
justified by the circumstances.

The causative inference process should be reasonably sensitive to the
various possible preventing conditions for an action or event, without being
overwhelmed by them.  Clearly, it is hopeless to try to take into account
all the possibilities that might in one case or another be anticipated, by
adding sufficiently many explicit qualifications to stored general
principles (phone book lookup normally works unless the number sought is
illegible, or ... or ... or ... ...).  If general predictive knowledge is to
be usefully qualified, it would seem that the qualifications will have to be
broad provisos of the "ceteris paribus" or "under normal or standard
conditions" sort, and not long lists of specific exceptions.  We want our
causative inference process to be capable of bringing relevant general
information and particular facts to bear when that is appropriate, without
becoming bogged down in a search through endless lists of specific
qualifications to general principles, and without otherwise complicating
the statement of causal principles excessively with explicit references
to large numbers of other similar principles.  What is needed is:

  i) a relatively modular declarative representation of principles and facts,
 ii) a good retrieval scheme for locating in memory all the individual pieces
     of knowledge that are relevant to a given causal reasoning problem, and 
iii) a set of rules for causative inference to govern the process of putting
     together or composing all available relevant knowledge to obtain an
     appropriate conclusion.  

The tasks of developing i) and iii) belong to what McCarthy calls the
epistemology of artificial intelligence, while the task of developing ii) is
a problem in what he calls the heuristics of AI.
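
In programming terms, the intended division of labor among i) - iii)
might look roughly as follows.  The sketch is schematic; the names, and
the crude matching used for retrieval, are invented for illustration.

    # i) Modular declarative representation: each principle is a separate
    #    unit, qualified by one broad proviso rather than by a long list
    #    of specific exceptions.
    PRINCIPLES = [
        {'action': 'LookupNumber', 'effect': 'JonesKnowsNumber',
         'proviso': 'NormalLookupConditions'},   # "ceteris paribus" clause
        {'action': 'DialNumber',   'effect': 'VoiceContact',
         'proviso': 'NormalPhoneConditions'},
    ]

    # ii) Retrieval: locate in memory all the knowledge relevant to a
    #     given causal reasoning problem (here, crudely, by matching on
    #     the action involved).
    def retrieve_relevant(action):
        return [p for p in PRINCIPLES if p['action'] == action]

    # iii) Composition: rules of causative inference that put the
    #      retrieved items together, letting particular known facts
    #      defeat a proviso when that is appropriate.
    def causative_conclusions(action, known_abnormal_provisos):
        return [p['effect'] for p in retrieve_relevant(action)
                if p['proviso'] not in known_abnormal_provisos]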

Conservative inferences are subject to a sort of mirror image of the
qualification problem for causative inferences, though this fact has
received less notice in the literature (perhaps it underlies McCarthy's
suspicion that the frame problem is a special case of the qualification
problem).  Here, the basic difficulty is that either our own actions or some
extraneous events may have problem-relevant effects beyond those that we
anticipate, thereby falsifying some of the conclusions of our conservative
inferences (our phone book lookup procedure may, through some unexpected
side-chain of causal effects, incapacitate the telephone that we intend to
use, or a lightning bolt may, while phone book lookup is proceeding, destroy
the local telephone power supply).  As in the case of causative inference,
some of these possible failure modes will be easier to anticipate than
others.  Here again, we want our inference process to be reasonably
sensitive to such possibilities, without being overwhelmed by them.

On the basis of our discussion thus far, it might seem that the frame and
qualification problems are peculiar to common-sense planning inferences,
where there is evidently a heavy reliance on imprecise knowledge and rough
rules of thumb.  The epistemological situation might seem to be quite
different in areas where scientific knowledge in the form of experimentally
established laws is available and applicable for the purposes of planning
predictions.  However, this appears not to be the case.  Let us first
consider planning problems in which use might be made of knowledge from the
"soft" sciences, such as sociology, political science, and economics, which
appear to have more in common with common sense than do the "hard" sciences.
The inferential problems encountered in planning an anti-drug-abuse program
or a presidential campaign or a government anti-inflation policy, while more
complex than those discussed above, do not seem to be different from them in
kind or principle.  Here, too, there are causative and conservative
inferences to be made, and the epistemological problems of explicating them
to be solved.  Moreover, the scientific laws that are available for these
planning purposes appear to require the same types of "ceteris paribus" or
"standard conditions" provisos as were discussed earlier in connection with
the principles used in planning a telephone call.

What, then, of planning domains in which use can be made of knowledge from
the relatively "hard" sciences, such as biology, chemistry, engineering
mechanics, and physics?  Is the epistemology of planning a particular
coronary bypass operation, or a new chemical manufacturing process, or a
construction schedule for a new museum, or a NASA space mission, really so
radically different from that which has already been discussed?  It would
seem not.  However, it would be wrong to think that there are no differences
at all introduced by the use of knowledge from, say, physics.  For here it
is possible, at least some of the time, to make predictive use of
fundamental laws of nature, thereby escaping to some extent the limitations
imposed by the "ceteris paribus" or "normal conditions" clauses that were
found to pervade prediction in other domains.  It will be of some interest
to determine what improvements in the predictive process are permitted by
the use of such laws, as compared with the use of laws of the qualified and
less exact type discussed earlier.

[to be continued]

Finally, we must observe that the frame and qualification problems are not
limited in their scope merely to planning inferences.  Rather, they occur in
connection with any sort of causal/subjunctive prediction or retrodiction.
Thus, for example, they are relevant to the epistemology of such things as
retrospective mechanical failure analysis, and the determination of moral
and legal responsibility (What caused the Squaw Valley cable car accident to
occur, and who is morally and/or legally responsible?  What caused the
thalidomide tragedies, and who should have been held responsible?  What
caused the Three Mile Island nuclear accident ... ?).
   X.  The First Step:  A Knowledge-Based, Hierarchical Travel Planner

                               This page last revised:   20 Mar 1980

In our initial implementation effort, we propose to construct a knowledge-
based, hierarchical planner that deals with an expanded version of the
problem domain of the original advice taker design -- travel planning and
related communications planning.  (A proposed first extension and
generalization of this initial program will be discussed in the next
section.)  We want a program that is capable of planning trips with a fairly
wide variety of starting places and destinations, while taking account of a
fairly wide variety of possible transportation modes and relevant
constraints.  We expect it to be able to plan for the availability of
knowledge, to know whether a given piece of information should be obtained
or imparted in advance of or during a given trip, and to be able to plan
telephone and mail communications in order to obtain and impart appropriate
information and other materials related to its travel planning, such as
accommodation rates, passports and visas, etc.  This will require the
temporal coordination of actions to be executed concurrently, and some
ability to model the knowledge and perhaps the intentions of people.

Though the detailed design of this program remains to be done, we expect
that the initial version will make use of a number of ideas (suitably
generalized and "logicized") from the recent Ph.D. theses of Mark Stefik
and Peter Friedland [Stefik 1980], [Friedland 1979].  For example, we intend
to bring Stefik's constraint-propagation techniques for hierarchical
planning into a logical framework that will permit a more systematic
approach to the reasoning involved in such techniques.  It may even be
possible to ease our initial implementation effort by adapting some of
Stefik's software for our purposes.
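
By way of illustration, the fragment below sketches constraint posting
and propagation during hierarchical plan refinement.  It is in the
general spirit of, and not a rendering of, Stefik's techniques, and all
of the travel-domain names are invented.

    EXPANSIONS = {
        'GetTo(NYC)':   ['Fly(SFO, JFK)', 'Drive(SFO, NYC)'],
        'Lodging(NYC)': ['Hotel(Manhattan)', 'Hotel(Airport)'],
    }

    # Refining one step may post constraints that propagate to its
    # siblings; here, choosing the (late-arriving) flight constrains
    # the subsequent choice of lodging.
    POSTED_BY = {
        'Fly(SFO, JFK)': [lambda choice: not choice.startswith('Hotel(')
                                         or choice == 'Hotel(Airport)'],
    }

    def choose(candidates, constraints):
        for c in candidates:
            if all(ok(c) for ok in constraints):
                return c
        return None   # a real planner would backtrack or relax here

    def refine_plan(abstract_steps):
        posted, concrete = [], []
        for step in abstract_steps:
            choice = choose(EXPANSIONS[step], posted)
            concrete.append(choice)
            posted.extend(POSTED_BY.get(choice, []))  # propagate
        return concrete

    # refine_plan(['GetTo(NYC)', 'Lodging(NYC)'])
    #     ==> ['Fly(SFO, JFK)', 'Hotel(Airport)']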

Because of a partial overlap of task domains and some other superficial
similarities, it may be asked whether our proposed travel and
communications planner is related in any significant way to the GUS travel
planner of [Bobrow et al. 1977].  In fact, it is not; the objectives of the
two programs are quite different.  The emphasis in the GUS project was on
natural language understanding; in the paper just cited the investigators
were mainly concerned to test the adequacy of a certain type of modular,
frame-oriented program organization to handle the problems of interactive
natural dialog in English.  The travel-planning aspect of the program was
merely a convenient way of providing a relatively narrow, task-oriented
focus for the natural-language dialog.  On the other hand, our proposed
travel planner is intended as a first step in the development of a general,
knowledge-based problem-solving system with the objectives enumerated above
in Section I.  Accordingly, we intend for the most part to abstract in our
research from the problems of natural language communication.

  XI.  Extension to an Advice-Taking Travel Planner

                               This page last revised:   29 Mar 1980

We propose to expand the initial hierarchical travel planner to permit it to
operate with different amounts of heuristic knowledge, to determine what
sort of knowledge would permit it to do a better job of planning, to request
appropriate advice on the basis of such determinations, and to make
effective use of such advice when it is offered.  This will require the
addition of a specialized problem-solving module for advice-seeking/taking.
Of course, all extensions (as well as the original core travel planner) are
to conform with the design ideas discussed earlier in this report.
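
In outline, the intended control cycle is roughly the following.
Everything here is schematic; the class, its methods, and the advisor
protocol are invented for illustration, not parts of the actual design.

    class AdviceSeekingPlanner:
        def __init__(self, heuristics):
            self.heuristics = set(heuristics)   # current heuristic knowledge

        def plan(self, tasks):
            # Stand-in for the hierarchical travel planner proper: it
            # handles just those tasks for which it has heuristics.
            return [t for t in tasks if t in self.heuristics]

        def knowledge_gaps(self, tasks):
            # Determine what further heuristic knowledge would permit a
            # better job of planning.
            return [t for t in tasks if t not in self.heuristics]

        def plan_with_advice(self, tasks, advisor):
            for gap in self.knowledge_gaps(tasks):
                advice = advisor(gap)            # request appropriate advice
                if advice is not None:
                    self.heuristics.add(advice)  # take the advice offered
            return self.plan(tasks)              # replan with advice in hand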
 XII.  References

[Bobrow et al. 1977]
Bobrow, D. G., Kaplan, R. M., Kay, M., Norman, D. A., Thompson, H. and
Winograd, T., "GUS, A Frame-Driven Dialog System," 
Artificial Intelligence 8 (1977), 155-173.

[Creary 1979]
Creary, L. G., "Propositional Attitudes: Fregean Representation and
Simulative Reasoning," Proceedings of the Sixth International Joint
Conference on Artificial Intelligence, Tokyo, Japan, August 1979.

[Ernst & Newell 1969]
Ernst, G. W., and Newell, A., <GPS: A Case Study in Generality and Problem
Solving>, New York, Academic Press, 1969.

[Feigenbaum, Buchanan, & Lederberg 1971]
Feigenbaum, E. A., Buchanan, B. G., & Lederberg, J., "On Generality and
Problem Solving: A Case Study Using the DENDRAL Program,"
<Machine Intelligence> 6, Meltzer, B. and Michie, D., (eds.),
Edinburgh, Edinburgh University Press, 1971.

[Friedland 1979]
Friedland, P., <Knowledge-Based Hierarchical Planning in Molecular Genetics>,
Ph. D. Dissertation, Computer Science Department, Stanford University, 1979.
Reprinted as Stanford Computer Science Department Report No. STAN-CS-79-771,
1979.

[Genesereth 1978]
Genesereth, M., <Automated Consultation for Complex Computer Systems>,
Ph. D. Dissertation, Division of Applied Sciences, Harvard University,
September 1978.

[Genesereth 1979]
Genesereth, M., "The Role of Plans in Automated Consultation," Proceedings
of the Sixth International Joint Conference on Artificial Intelligence,
Tokyo, Japan, August 1979, pp. 311-319.

[Hayes-Roth, Waterman, & Lenat 1978]
Hayes-Roth, F., Waterman, D. A., and Lenat, D., "Principles of Pattern-
Directed Inference Systems," in <Pattern-Directed Inference Systems>,
Waterman, D., and Hayes-Roth, F., eds., New York, Academic Press, 1978.

[McCarthy 1959] 
McCarthy, J., "Programs with Common Sense", in <Mechanisation of Thought
Processes> (proceedings of a symposium), Teddington, England, National
Physical Laboratory, 1959.  Reprinted in M. Minsky (ed.), <Semantic
Information Processing>, Cambridge, Mass., The MIT Press, 1968.

[McCarthy 1977]
McCarthy, J., "Epistemological Problems of Artificial Intelligence",
Proceedings of the 5th International Joint Conference on Artificial
Intelligence, (1977) (Invited Paper).

[McCarthy 1979] 
McCarthy, J., <First Order Theories of Individual Concepts and
Propositions>, Stanford AI Memo AIM-325 (March 1979).  Reprinted in
<Machine Intelligence> 9, edited by J. E. Hayes, D. Michie, and
L. I. Mikulich, New York, John Wiley, 1979.

[McCarthy & Hayes 1969]  
McCarthy, J. and Hayes, P., "Some Philosophical Problems from the Standpoint
of Artificial Intelligence", Artificial Intelligence Laboratory Memo AIM-73,
Stanford University, November 1968.  Reprinted in B. Meltzer and D. Michie
(eds.), <Machine Intelligence> 4, New York, American Elsevier, 1969.

[McDermott 1979]
McDermott, J., "Learning to Use Analogies," Proceedings of the Sixth
International Joint Conference on Artificial Intelligence, Tokyo, Japan,
August 1979, 568-576.

[Moore & Newell 1973]
Moore, J., and Newell, A., "How Can Merlin Understand?", in <Knowledge
and Cognition>, edited by L. Gregg, Lawrence Erlbaum Associates, 1973.

[Newell 1969]
Newell, A., "Heuristic Programming: Ill-Structured Problems," in
<Progress in Operations Research>, Vol. III, edited by J. Aronofsky,
New York, Wiley, 1969.

[Newell 1973]
Newell, A., "Artificial Intelligence and the Concept of Mind," in
<Computer Models of Thought and Language>, edited by R. Schank and
K. Colby, San Francisco, W. H. Freeman and Co., 1973.

[Newell, Shaw, & Simon 1960]
Newell, A., Shaw, J. C., and Simon, H. A., "Report on a General Problem-
Solving Program for a Computer," <Information Processing:  Proceedings of the
International Conference on Information Processing>, Paris, UNESCO, 1960.
 
[Newell & Simon 1972]
Newell, A., and Simon, H. A., <Human Problem Solving>, Englewood Cliffs, N. J., 
Prentice-Hall, Inc., 1972.

[Stefik 1980]
Stefik, M., <Planning with Constraints>, Ph. D. Dissertation, Computer
Science Department, Stanford University, 1980.  Reprinted as Stanford
Computer Science Department Report No. STAN-CS-80-784, January 1980.